MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression
Mu, Linjie, Huang, Zhongzhen, Qin, Shengqian, Zhu, Yakun, Zhang, Shaoting, Zhang, Xiaofan
Large vision-language models (LVLMs) have shown great promise in medical applications, particularly in visual question answering (MedVQA) and diagnosis from medical images. However, existing datasets and models often fail to consider critical aspects of medical diagnostics, such as the integration of historical records and the analysis of disease progression over time. In this paper, we introduce MMXU (Multimodal and MultiX-ray Understanding), a novel dataset for MedVQA that focuses on identifying changes in specific regions between two patient visits. Unlike previous datasets that primarily address single-image questions, MMXU enables multi-image questions, incorporating both current and historical patient data. We demonstrate the limitations of current LVLMs in identifying disease progression on MMXU-test, even for models that perform well on traditional benchmarks. To address this, we propose a MedRecord-Augmented Generation (MAG) approach that incorporates both global and regional historical records. Our experiments show that integrating historical records improves diagnostic accuracy by at least 20%, narrowing the gap between current LVLMs and human expert performance. Additionally, fine-tuning models with MAG on MMXU-dev yields notable further improvements. We hope this work illuminates avenues for advancing the use of LVLMs in medical diagnostics by emphasizing the importance of historical context in interpreting medical images. Our dataset is released at https://github.com/linjiemu/MMXU.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
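The MMXU abstract above does not spell out how MAG assembles its model input, so here is a minimal sketch of how global and regional historical records might be folded into a two-image MedVQA prompt. The function name, field names, image placeholders, and prompt wording are all assumptions of this sketch, not the paper's implementation.

```python
# Hypothetical sketch of a MedRecord-Augmented Generation (MAG) style prompt.
# Every name and format below is illustrative, not taken from the paper.

def build_mag_prompt(question, global_record, regional_records):
    """Combine global and regional historical records with a
    multi-image question about change between two patient visits."""
    history_lines = [f"Global history: {global_record}"]
    history_lines += [
        f"Region '{region}': {finding}"
        for region, finding in regional_records.items()
    ]
    history = "\n".join(history_lines)
    return (
        "You are given two chest X-rays: <image_prior> (previous visit) "
        "and <image_current> (current visit).\n"
        f"Patient history:\n{history}\n"
        f"Question: {question}\n"
        "Describe the change between the two visits."
    )

prompt = build_mag_prompt(
    question="Has the opacity in the left lower lobe progressed?",
    global_record="58-year-old male, prior pneumonia, visit 2023-01-10.",
    regional_records={"left lower lobe": "patchy opacity noted at prior visit"},
)
print(prompt)
```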
Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap
Tekkesinoglu, Sule, Habibovic, Azra, Kunze, Lars
Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review is therefore crucial to assess how well current approaches align with the varied interests and expectations within the AV ecosystem. This study presents a review of the complexities associated with explanation generation and presentation, in order to facilitate the development of more effective and inclusive explainable AV systems. Our investigation led us to categorise the existing literature into three primary topics: explanatory tasks, explanatory information, and explanatory information communication. Drawing upon our insights, we propose a comprehensive roadmap for future research centred on (i) knowing the interlocutor, (ii) generating timely explanations, (iii) communicating human-friendly explanations, and (iv) continuous learning. The roadmap is underpinned by principles of responsible research and innovation, emphasising the significance of diverse explanation requirements. To effectively tackle the challenges of implementing explainable AV systems, we delineate several research directions, including privacy-preserving data integration, ethical frameworks, real-time analytics, human-centric interaction design, and enhanced cross-disciplinary collaboration. By exploring these directions, the study aims to guide the development and deployment of explainable AVs informed by a holistic understanding of user needs, technological advancements, regulatory compliance, and ethical considerations, thereby ensuring safer and more trustworthy autonomous driving experiences.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Oceania > New Zealand (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Transportation > Ground > Road (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Human Computer Interaction > Interfaces (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Within the field of Requirements Engineering (RE), the increasing significance of Explainable Artificial Intelligence (XAI) in aligning AI-supported systems with user needs, societal expectations, and regulatory standards has garnered recognition. In general, explainability has emerged as an important non-functional requirement that impacts system quality. However, the supposed trade-off between explainability and performance challenges the presumed positive influence of explainability. If meeting the requirement of explainability entails a reduction in system performance, then careful consideration must be given to which of these quality aspects takes precedence and how to compromise between them. In this paper, we critically examine the alleged trade-off. We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk. By providing a foundation for future research and best practices, this work aims to advance the field of RE for AI.
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making. In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or perhaps even worse, people follow them blindly, even when the recommendations are wrong. Explainable artificial intelligence mitigates this by helping people to understand how and why models give certain recommendations. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded. We argue this is because we have failed to account for two things. First, recommendations (and their explanations) take control from human decision makers, limiting their agency. Second, giving recommendations and explanations does not align with the cognitive processes employed by people making decisions. This position paper proposes a new conceptual framework called Evaluative AI for explainable decision support. This is a machine-in-the-loop paradigm in which decision support tools provide evidence for and against decisions made by people, rather than provide recommendations to accept or reject. We argue that this mitigates issues of over- and under-reliance on decision support tools, and better leverages human expertise in decision making.
- North America > United States > New York > New York County > New York City (0.14)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Indiana (0.04)
- Health & Medicine > Therapeutic Area > Oncology (0.46)
- Health & Medicine > Therapeutic Area > Dermatology (0.46)
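To make the contrast in the Evaluative AI entry above concrete, here is a toy sketch: instead of returning a recommendation, the tool surfaces evidence for and against a hypothesis the human decision maker has chosen. The dermatology example echoes the entry's topic tags, but all feature names and scores are invented for illustration; this is not the paper's actual framework code.

```python
# Recommendation-driven versus evaluative (hypothesis-driven) decision support.
# Scores, features, and evidence weights below are invented for illustration.

def recommendation_support(scores):
    """Classic decision support: pick and return one option."""
    return max(scores, key=scores.get)

def evaluative_support(hypothesis, evidence):
    """Evaluative AI: return evidence for and against the *user's*
    hypothesis instead of issuing a recommendation."""
    supporting = {k: v for k, v in evidence[hypothesis].items() if v > 0}
    opposing = {k: v for k, v in evidence[hypothesis].items() if v < 0}
    return supporting, opposing

evidence = {
    "melanoma": {"irregular border": +0.6, "small diameter": -0.3},
    "benign nevus": {"uniform colour": +0.5, "recent growth": -0.4},
}
scores = {"melanoma": 0.62, "benign nevus": 0.38}

print("Recommendation-driven:", recommendation_support(scores))
for_h, against_h = evaluative_support("melanoma", evidence)
print("Evidence for melanoma:", for_h)
print("Evidence against melanoma:", against_h)
```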
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Langer, Markus, Oster, Daniel, Speith, Timo, Hermanns, Holger, Kästner, Lena, Schmidt, Eva, Sesing, Andreas, Baum, Kevin
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve as common ground for researchers from the variety of disciplines involved in XAI. It emphasizes where there is interdisciplinary potential in the evaluation and development of explainability approaches.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Germany > Saarland > Saarbrücken (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report (1.00)
- Overview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (0.92)
- Education (0.92)
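As a reading aid, the core relation the conceptual model in the entry above spells out (stakeholders hold desiderata, which explainability approaches are meant to satisfy) can be caricatured as a small data structure. The class and field names below are this sketch's assumptions, not the paper's notation.

```python
# Minimal data-structure sketch of the stakeholder/desiderata relation.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Desideratum:
    name: str                                         # e.g. "trustworthiness"
    satisfied_by: list = field(default_factory=list)  # candidate approaches

@dataclass
class Stakeholder:
    role: str                                         # e.g. "user", "regulator"
    desiderata: list = field(default_factory=list)

regulator = Stakeholder(
    role="regulator",
    desiderata=[Desideratum("legal compliance", satisfied_by=["counterfactuals"])],
)
for d in regulator.desiderata:
    print(f"{regulator.role} wants {d.name}; candidate approaches: {d.satisfied_by}")
```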
Explaining reputation assessments
Nunes, Ingrid, Taylor, Phillip, Barakat, Lina, Griffiths, Nathan, Miles, Simon
Reputation is crucial to enabling human or software agents to select among alternative providers. Although several effective reputation assessment methods exist, they typically distil reputation into a numerical representation, with no accompanying explanation of the rationale behind the assessment. Such explanations would allow users or clients to make a richer assessment of providers, and to tailor selection according to their preferences and current context. In this paper, we propose an approach to explaining the rationale behind assessments from quantitative reputation models, by generating arguments that are combined to form explanations. Our approach adapts, extends, and combines existing approaches for explaining decisions made with multi-attribute decision models in the context of reputation. We present example argument templates and describe how to select their parameters using explanation algorithms. Our proposal was evaluated by means of a user study that followed an existing protocol. Our results give evidence that although explanations present only a subset of the information contained in trust scores, they are sufficient for evaluating recommended providers as effectively as the trust scores themselves. Moreover, when explanation arguments reveal implicit model information, they are less persuasive than scores.
- North America > United States > New York > New York County > New York City (0.04)
- South America > Brazil > Rio Grande do Sul > Porto Alegre (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
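A toy sketch of the idea in the entry above: a weighted multi-attribute model distils ratings into a single trust score, and an explanation algorithm instead selects the parameters of an argument template, here simply the attribute contributing most to the score. The attributes, weights, and template wording are invented for illustration and are not the paper's actual templates or algorithms.

```python
# Weighted multi-attribute reputation score plus one template-based argument.
# Attribute names, weights, ratings, and the template are illustrative only.

weights = {"timeliness": 0.5, "quality": 0.3, "cost": 0.2}
ratings = {"timeliness": 0.9, "quality": 0.6, "cost": 0.4}  # ratings in [0, 1]

# Aggregate into a single trust score (what classic models report).
score = sum(weights[a] * ratings[a] for a in weights)

# An explanation algorithm (here: pick the attribute contributing most)
# fills in the parameters of an argument template.
top = max(weights, key=lambda a: weights[a] * ratings[a])
argument = (
    f"This provider's reputation is {score:.2f}, "
    f"mainly because of its strong {top} "
    f"(rating {ratings[top]:.1f}, weight {weights[top]:.1f})."
)
print(argument)
```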
What are the Data Requirements for AI in Manufacturing? - Advanced Manufacturing
At the core of today's state-of-the-art Artificial Intelligence (AI) algorithms is the ability to learn complex patterns from a sample of data. In the manufacturing context, an example of a pattern might be the ways in which a set of parameters contained in that data, all related to a process in a factory, vary together. When considering AI, it is important to understand the data requirements at the outset. The algorithm learns the patterns by being shown many examples of the parameter values in question, typically between a few thousand and several million. This data sample is a representation of the history of the factory process.
Considering AI? Understand what data you need at the outset - DataProphet
At the core of today's state-of-the-art Artificial Intelligence (AI) algorithms is the ability to learn patterns from a sample of data. In the manufacturing context, an example of such a pattern might be the ways in which a set of parameters contained in that data, all related to a process in a factory, vary together. When considering using AI, it is important to understand the data requirements. The general answer as to what constitutes the "right" data for AI-enabled process optimisation is the set of data that is sufficient to describe how changes to a process's parameters affect quality. The bulk of process data can generally be represented as a table, or a collection of tables, comprising columns (parameters) and rows (production examples, representing, say, one production batch per row).
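As a concrete illustration of the table layout just described, here is a minimal sketch: parameter columns, one row per production batch, and a quality column as the outcome. The column names and values are invented, and the linear model is only a stand-in for the pattern learning the article describes, which in practice needs thousands to millions of rows.

```python
# Process data as a table: columns are parameters, rows are production batches,
# and a quality column is the target. All values below are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "furnace_temp_C":  [1490, 1510, 1505, 1520, 1498],
    "pour_time_s":     [32,   30,   35,   29,   33],
    "additive_pct":    [1.2,  1.4,  1.3,  1.5,  1.1],
    "defect_rate_pct": [2.1,  1.6,  1.9,  1.4,  2.3],  # quality outcome
})

X = df.drop(columns="defect_rate_pct")   # process parameters
y = df["defect_rate_pct"]                # quality measure

# Learn how parameter changes relate to quality; a toy version of the
# pattern-learning the article describes.
model = LinearRegression().fit(X, y)
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:+.3f} change in defect rate per unit")
```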